Graph convolutional network method based on hybrid feature modeling
Zhuoran LI, Zhonglin YE, Haixing ZHAO, Jingjing LIN
Journal of Computer Applications 2022, 42 (11): 3354-3363. DOI: 10.11772/j.issn.1001-9081.2021111981

Networks contain complex information, and richer approaches are needed to extract useful information from them; however, existing single-feature Graph Neural Networks (GNNs) cannot fully describe the relevant characteristics of a network. To address this problem, a Hybrid feature-based Dual Graph Convolutional Network (HDGCN) was proposed. Firstly, the structure feature vectors and semantic feature vectors of nodes were obtained by a Graph Convolutional Network (GCN). Secondly, node features were aggregated selectively by an aggregation function based on an attention mechanism or a gating mechanism, enhancing the feature expression ability of nodes. Finally, the hybrid feature vectors of nodes were obtained through a fusion mechanism based on a dual-channel GCN, and the structure features and semantic features of nodes were modeled jointly so that the two kinds of features complement each other and improve performance on downstream machine learning tasks. Verification was performed on the CiteSeer, DBLP (DataBase systems and Logic Programming) and SDBLP (Simplified DataBase systems and Logic Programming) datasets. Experimental results show that, compared with the graph convolutional network model trained on structure features alone, the dual-channel graph convolutional network model trained on hybrid features improves the average Micro-F1 by 2.43, 2.14, 1.86 and 2.13 percentage points and the average Macro-F1 by 1.38, 0.33, 1.06 and 0.86 percentage points when the training set proportion is 20%, 40%, 60% and 80%, respectively. The difference in accuracy between concat and mean as the fusion strategy is no more than 0.5 percentage points, indicating that either can serve as the fusion strategy. HDGCN achieves higher accuracy on node classification and clustering tasks than models trained on the structure or semantic network alone, and obtains its best results with an output dimension of 64, a learning rate of 0.001, 2 graph convolutional layers, and an attention vector dimension of 128.
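To make the described pipeline more concrete, the following is a minimal PyTorch sketch of a dual-channel GCN with attention-based fusion, written under stated assumptions rather than reproducing the authors' implementation: the class names (GCNLayer, DualChannelGCN), the two-layer depth per channel, the dense normalized-adjacency propagation, and the per-node attention fusion head are all illustrative choices.

```python
# Minimal sketch of a dual-channel GCN with attention fusion (assumption-based,
# not the HDGCN reference code). One channel reads a structure graph, the other
# a semantic graph; an attention head mixes the two node representations.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)


class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj_hat: torch.Tensor) -> torch.Tensor:
        return F.relu(adj_hat @ self.linear(x))


class DualChannelGCN(nn.Module):
    """Two parallel 2-layer GCN channels whose outputs are fused per node
    by a small attention head before classification."""
    def __init__(self, in_dim: int, hid_dim: int, out_dim: int, n_classes: int):
        super().__init__()
        self.struct_gcn = nn.ModuleList([GCNLayer(in_dim, hid_dim), GCNLayer(hid_dim, out_dim)])
        self.sem_gcn = nn.ModuleList([GCNLayer(in_dim, hid_dim), GCNLayer(hid_dim, out_dim)])
        self.attn = nn.Linear(out_dim, 1)           # scores each channel per node
        self.classifier = nn.Linear(out_dim, n_classes)

    @staticmethod
    def _channel(layers, x, adj_hat):
        for layer in layers:
            x = layer(x, adj_hat)
        return x

    def forward(self, x, adj_struct, adj_sem):
        h_s = self._channel(self.struct_gcn, x, normalize_adj(adj_struct))
        h_t = self._channel(self.sem_gcn, x, normalize_adj(adj_sem))
        # Attention fusion: per-node softmax over the two channels.
        scores = torch.cat([self.attn(h_s), self.attn(h_t)], dim=1)   # [N, 2]
        alpha = F.softmax(scores, dim=1).unsqueeze(-1)                # [N, 2, 1]
        h = alpha[:, 0] * h_s + alpha[:, 1] * h_t                     # hybrid node features
        return self.classifier(h)


# Toy usage: 5 nodes, 8-dim features, random symmetric structure/semantic graphs.
if __name__ == "__main__":
    x = torch.randn(5, 8)
    a = (torch.rand(5, 5) > 0.5).float()
    adj_struct = ((a + a.t()) > 0).float()
    b = (torch.rand(5, 5) > 0.5).float()
    adj_sem = ((b + b.t()) > 0).float()
    model = DualChannelGCN(in_dim=8, hid_dim=16, out_dim=16, n_classes=3)
    print(model(x, adj_struct, adj_sem).shape)  # torch.Size([5, 3])
```

A mean or concat fusion, as compared in the abstract, would simply replace the attention-weighted sum with `(h_s + h_t) / 2` or `torch.cat([h_s, h_t], dim=1)` (the latter doubling the classifier's input dimension).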
